The distribution gap between training datasets and the data encountered in production is well acknowledged. Training datasets are typically constructed over a fixed period of time and by carefully curating the data to be labeled. As a result, a training dataset may not contain all of the data variations that can be encountered in a real-world production environment. Tasked with building an entity resolution system -- a model that identifies and consolidates data points representing the same person -- our first model exhibited a clear training-production performance gap. In this case study, we discuss our human-in-the-loop enabled, data-centric solution for closing the training-production performance divergence. We conclude with takeaways that apply to data-centric learning more broadly.
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
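The two-phase recipe can be summarized in pseudocode. Below is a minimal sketch of the data flow only; every callable passed in is a hypothetical stand-in for the corresponding model operation, not the authors' implementation or a real training API.

```python
# Minimal sketch of the Constitutional AI / RLAIF data flow described above.
# All callables are hypothetical stand-ins for the corresponding model operations.
from typing import Callable, List, Tuple

Model = Callable[[str], str]

def supervised_phase(
    generate: Model,
    critique: Callable[[str, str], str],            # (prompt, response) -> critique
    revise: Callable[[str, str, str], str],         # (prompt, response, critique) -> revision
    finetune: Callable[[List[Tuple[str, str]]], Model],
    prompts: List[str],
) -> Model:
    """Sample, self-critique against the constitution, revise, then finetune on revisions."""
    revised = []
    for p in prompts:
        r = generate(p)
        revised.append((p, revise(p, r, critique(p, r))))
    return finetune(revised)                         # the SL-finetuned model

def rl_phase(
    sl_model: Model,
    prefer: Callable[[str, str, str], int],          # AI feedback: index of the better sample
    fit_preference_model: Callable[[List[Tuple[str, str, str, int]]], Callable[[str, str], float]],
    rl_train: Callable[[Model, Callable[[str, str], float]], Model],
    prompts: List[str],
) -> Model:
    """Collect AI preference labels, fit a preference model, then train with RL against it."""
    comparisons = []
    for p in prompts:
        a, b = sl_model(p), sl_model(p)              # two samples from the SL model
        comparisons.append((p, a, b, prefer(p, a, b)))
    reward = fit_preference_model(comparisons)
    return rl_train(sl_model, reward)                # RL from AI Feedback (RLAIF)
```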
Generative adversarial networks are a promising tool for image generation in the astronomy domain. Of particular interest are conditional generative adversarial networks (cGANs), which allow one to divide images into several classes according to the value of some property of the image, and then to specify the required class when generating new images. In the case of images from Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the total brightness of all image pixels (image size), which is in direct correlation with the energy of primary particles. We used the cGAN technique to generate images similar to those obtained in the TAIGA-IACT experiment. As a training set, we used a set of two-dimensional images generated with the TAIGA Monte Carlo simulation software. We artificially divided the training set into 10 classes, sorting images by size and defining the class boundaries so that the same number of images falls into each class. These classes were used while training our network. The paper shows that for each class, the size distribution of the generated images is close to normal, with the mean value located approximately in the middle of the corresponding class. We also show that for the generated images, the total image-size distribution obtained by summing the distributions over all classes is close to the original distribution of the training set. The results obtained will be useful for more accurate generation of realistic synthetic images similar to the ones taken by IACTs.
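As a concrete illustration of the equal-population class boundaries described above, the bin edges can be taken as empirical quantiles of the image sizes. A minimal NumPy sketch, under the assumption that the training images sit in a single array (names are illustrative, not the paper's code):

```python
import numpy as np

def size_class_labels(images: np.ndarray, n_classes: int = 10) -> np.ndarray:
    """Assign each image to one of n_classes bins of its total brightness
    ("image size"), choosing bin edges so each class holds roughly the same
    number of images (empirical quantiles)."""
    sizes = images.reshape(len(images), -1).sum(axis=1)          # total pixel brightness
    edges = np.quantile(sizes, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(sizes, edges)                             # labels in 0..n_classes-1

# Example: 1000 random 32x32 "images" -> roughly 100 per class.
labels = size_class_labels(np.random.rand(1000, 32, 32))
```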
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
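Since the models are openly released, they can be loaded through standard tooling. A minimal usage sketch, assuming the Hugging Face transformers library and the published checkpoints; the small bloom-560m variant is used here purely for illustration, since the full 176B model requires multi-GPU hardware:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small BLOOM checkpoint and generate a continuation.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a 176B-parameter open-access language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```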
Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on how to turn it into one that can be productively studied empirically. We first present an experimental design centered on choosing tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat -- a trivial baseline strategy for scalable oversight -- substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models and bolster recent findings that large language models can productively assist humans with difficult tasks.
We establish a simple connection between robust and differentially-private algorithms: private mechanisms which perform well with very high probability are automatically robust in the sense that they retain accuracy even if a constant fraction of the samples they receive are adversarially corrupted. Since optimal mechanisms typically achieve these high success probabilities, our results imply that optimal private mechanisms for many basic statistics problems are robust. We investigate the consequences of this observation for both algorithms and computational complexity across different statistical problems. Assuming the Brennan-Bresler secret-leakage planted clique conjecture, we demonstrate a fundamental tradeoff between computational efficiency, privacy leakage, and success probability for sparse mean estimation. Private algorithms which match this tradeoff are not yet known -- we achieve that (up to polylogarithmic factors) in a polynomially-large range of parameters via the Sum-of-Squares method. To establish an information-computation gap for private sparse mean estimation, we also design new (exponential-time) mechanisms using fewer samples than efficient algorithms must use. Finally, we give evidence for privacy-induced information-computation gaps for several other statistics and learning problems, including PAC learning parity functions and estimation of the mean of a multivariate Gaussian.
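One way to see the connection stated in the first sentence is through group privacy; the following is a sketch of that argument, under the assumption that "performs well with very high probability" means failing with probability at most delta on the uncorrupted dataset.

```latex
% Sketch: group privacy turns a high success probability into robustness.
% Assume M is \varepsilon-differentially private and fails on a clean
% n-sample dataset X with probability at most \delta. For any X' that
% differs from X in at most \eta n entries (an \eta-fraction of
% adversarial corruptions), group privacy gives
\[
  \Pr\bigl[M(X')\ \text{fails}\bigr]
  \;\le\; e^{\varepsilon \eta n}\,\Pr\bigl[M(X)\ \text{fails}\bigr]
  \;\le\; e^{\varepsilon \eta n}\,\delta ,
\]
% so whenever \delta \ll e^{-\varepsilon \eta n}, the mechanism stays
% accurate with good probability even on the corrupted input.
```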
With the advent of deep learning (DL), super-resolution (SR) has also become a thriving research field. However, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion (DDPM) and transformer-based SR models. We critically discuss contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also provide several visualizations of the models and methods throughout each chapter to facilitate a global understanding of trends in the field. This review ultimately aims to help researchers push the boundaries of DL applied to SR.
Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks. However, they lack interpretability and transparency. Current explainability approaches are typically local and treat GNNs as black boxes. They do not look inside the model, inhibiting human trust in the model and its explanations. Motivated by the ability of neurons to detect high-level semantic concepts in vision models, we perform a novel analysis of the behavior of individual GNN neurons to answer questions about GNN interpretability, and propose new metrics for evaluating the interpretability of GNN neurons. We propose a novel approach for producing global explanations for GNNs using neuron-level concepts, enabling practitioners to have a high-level view of the model. Specifically, (i) to the best of our knowledge, this is the first work showing that GNN neurons act as concept detectors and have strong alignment with concepts formulated as logical compositions of node degree and neighborhood properties; (ii) we quantitatively assess the importance of detected concepts and identify a trade-off between training duration and neuron-level interpretability; (iii) we demonstrate that our global explainability approach has advantages over the current state of the art -- we can disentangle the explanation into individual interpretable concepts backed by logical descriptions, which reduces the potential for bias and improves user-friendliness.
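To make the neuron-as-concept-detector idea concrete, alignment can be scored by comparing the set of nodes that activate a neuron with the set of nodes satisfying a logical concept. The minimal sketch below uses an IoU-style score and an illustrative concept over node degree and neighbor attributes; both the metric and the concept are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def concept_mask(adj: np.ndarray, feats: np.ndarray) -> np.ndarray:
    """Illustrative concept: 'degree >= 3 AND has a neighbor with attribute x0 > 0'."""
    degree = adj.sum(axis=1)
    neighbor_has_pos_x0 = (adj @ (feats[:, 0] > 0)) > 0
    return (degree >= 3) & neighbor_has_pos_x0        # logical composition of node properties

def neuron_concept_iou(activations: np.ndarray, concept: np.ndarray, thresh: float = 0.0) -> float:
    """IoU between the nodes activating a neuron and the nodes satisfying the concept."""
    active = activations > thresh
    inter = np.logical_and(active, concept).sum()
    union = np.logical_or(active, concept).sum()
    return float(inter) / max(float(union), 1.0)
```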
Federated learning (FL) is a framework for machine learning across heterogeneous client devices in a privacy-preserving manner. To date, most FL algorithms learn a "global" server model over multiple rounds. In each round, the same server model is broadcast to all participating clients, updated locally, and then aggregated across clients. In this work, we propose a more general procedure in which clients "select" what values are sent to them. Notably, this allows clients to operate on smaller, data-dependent slices. To make this practical, we outline the primitive of federated select, which enables client-specific selection in realistic FL systems. We discuss how federated select can be used for model training and show that it can lead to drastic reductions in communication and client memory usage, potentially enabling the training of models too large to fit on-device. We also discuss the implications of federated select for privacy and trust, which in turn influence possible system constraints and designs. Finally, we discuss open questions concerning model architectures, privacy-preserving technologies, and practical FL systems.
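A single round with a federated-select-style primitive can be sketched as follows: each client requests only the slices (keys) it needs, trains locally on those slices, and the server aggregates each slice over the clients that selected it. All names and the averaging rule are illustrative assumptions, not the system described in the paper.

```python
from typing import Callable, Dict, List
import numpy as np

def federated_select_round(
    server_state: Dict[str, np.ndarray],
    client_keys: List[List[str]],                        # per-client selection of slices
    client_update: Callable[[Dict[str, np.ndarray]], Dict[str, np.ndarray]],
) -> Dict[str, np.ndarray]:
    """One round: send each client only its selected slices, then aggregate per slice."""
    sums: Dict[str, np.ndarray] = {}
    counts: Dict[str, int] = {}
    for keys in client_keys:
        selected = {k: server_state[k] for k in keys}    # send only the selected slices
        update = client_update(selected)                  # local training on the slice
        for k, v in update.items():
            sums[k] = sums.get(k, 0) + v
            counts[k] = counts.get(k, 0) + 1
    new_state = dict(server_state)
    for k in sums:                                        # average over selecting clients
        new_state[k] = sums[k] / counts[k]
    return new_state
```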
This paper proposes a new method for estimating the conditional average treatment effect. It is called TNW-CATE (Trainable Nadaraya-Watson regression for CATE) and is based on the assumption that the number of controls is rather large while the number of treatments is small. TNW-CATE uses Nadaraya-Watson regression to predict outcomes for patients in the control and treatment groups. The main idea behind TNW-CATE is to train the kernel of the Nadaraya-Watson regression by using a weight-sharing neural network of a specific form. The network is trained on the controls, and the standard kernel is replaced with a set of neural subnetworks with shared parameters such that every subnetwork implements a trainable kernel, while the whole network implements the Nadaraya-Watson estimator. The network memorizes how feature vectors are located in the feature space. The proposed approach is similar to transfer learning when the domains of the source and target data are similar but the tasks are different. Various numerical simulation experiments illustrate TNW-CATE and compare it with the well-known T-learner, S-learner, and X-learner for several types of control and treatment outcome functions. The code implementing the TNW-CATE algorithm is available at https://github.com/stasychbr/tnw-cate.
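The trainable-kernel idea can be illustrated with a small weight-sharing network that scores every (query, training point) pair and uses the normalized scores as Nadaraya-Watson weights. The sketch below is a PyTorch illustration under simplified assumptions; the paper's actual architecture and training procedure (see the linked repository) differ in the details.

```python
import torch
import torch.nn as nn

class TrainableNWRegressor(nn.Module):
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        # Shared-parameter kernel subnetwork applied to every pair (x, x_i).
        self.kernel = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor, x_train: torch.Tensor, y_train: torch.Tensor) -> torch.Tensor:
        # x: (b, d); x_train: (n, d); y_train: (n,)
        b, n = x.shape[0], x_train.shape[0]
        pairs = torch.cat(
            [x.unsqueeze(1).expand(b, n, -1), x_train.unsqueeze(0).expand(b, n, -1)], dim=-1
        )                                               # (b, n, 2d) pairwise inputs
        scores = self.kernel(pairs).squeeze(-1)         # (b, n) trainable kernel values
        weights = torch.softmax(scores, dim=-1)         # normalize, as in Nadaraya-Watson
        return weights @ y_train                        # (b,) weighted average of outcomes
```

In this simplified picture, the kernel parameters would be fit on the abundant control group (e.g., by minimizing MSE against held-out control outcomes), and the same trained kernel would then be reused to form predictions for the small treatment group.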